Prerequisites

This example uses the Python API of the XDMoD Data Analytics Framework, so you need a working Python installation. In R, you should install the reticulate library and bind it to the Python environment where the XDMoD framework is installed.

Quick Conda Install

Here is a quick way to install everything (tested on WSL Ubuntu):

conda create -n xdmod-notebooks -y python=3.11 r=4.3
conda activate xdmod-notebooks
# base-notebook
conda install -y 'jupyterlab' 'notebook' 'jupyterhub' 'nbclassic'
# scipy-notebook
conda install -y 'altair' 'beautifulsoup4' 'bokeh' 'bottleneck' 'cloudpickle' \
    'conda-forge::blas=*=openblas' \
    'cython' 'dask' 'dill' 'h5py' 'ipympl' 'ipywidgets' 'jupyterlab-git' \
    'matplotlib-base' 'numba' 'numexpr' 'openpyxl' 'pandas' 'patsy' 'protobuf' \
    'pytables' 'scikit-image' 'scikit-learn' 'scipy' 'seaborn' 'sqlalchemy' \
    'statsmodels' 'sympy' 'widgetsnbextension' 'xlrd'
# r-notebook
conda install -y 'r-base' 'r-caret' 'r-crayon' 'r-devtools' 'r-e1071' \
    'r-forecast' 'r-hexbin' 'r-htmltools' 'r-htmlwidgets' 'r-irkernel' \
    'r-nycflights13' 'r-randomforest' 'r-rcurl' 'r-rmarkdown' 'r-rodbc' \
    'r-rsqlite' 'r-shiny' 'r-tidymodels' 'r-tidyverse' 'unixodbc'

# Other
conda install -y 'pymysql' 'requests' \
    'r-plotly' 'r-repr' 'r-irdisplay' 'r-pbdzmq' 'r-reticulate' 'r-cowplot' \
    'r-rjson' 'r-dotenv'

Install RStudio Server:

wget -q "https://download2.rstudio.org/server/jammy/amd64/rstudio-server-2023.12.1-402-amd64.deb"
sudo dpkg -i rstudio-server-*-amd64.deb
rm rstudio-server-*-amd64.deb

# specify which version of r to use
echo "rsession-which-r=$(which R)" | sudo tee -a /etc/rstudio/rserver.conf

Alternatively, you can install the regular desktop version of RStudio:

wget https://download1.rstudio.org/electron/jammy/amd64/rstudio-2023.12.1-402-amd64.deb
sudo dpkg -i rstudio-2023.12.1-402-amd64.deb
rm rstudio-*-amd64.deb

Finally, install the xdmod-data Python API:

pip install --upgrade 'xdmod-data>=1.0.0,<2.0.0' python-dotenv tabulate

Launch RStudio Server and open it at http://localhost:8787

Obtain an XDMoD-API token

You will need an XDMoD API token in order to access the XDMoD Data Analytics Framework. Follow these instructions to obtain an API token. Write the token to the ~/xdmod-data.env file; it will be loaded later, so you will not need to keep the token in the R Markdown file itself.

cat > ~/xdmod-data.env << EOF
XDMOD_API_TOKEN=<my secret XDMoD API token>
EOF
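Dotenv-style loaders (python-dotenv in Python, the dotenv package in R) simply export each KEY=VALUE line of this file into the process environment. The stdlib-only sketch below illustrates that mechanism with a throwaway temp file standing in for ~/xdmod-data.env:

```python
# Sketch of what dotenv-style loaders do with ~/xdmod-data.env:
# export each KEY=VALUE line into the environment.
# A throwaway temp file stands in for the real ~/xdmod-data.env here.
import os
import tempfile

env_path = os.path.join(tempfile.mkdtemp(), "xdmod-data.env")
with open(env_path, "w") as f:
    f.write("XDMOD_API_TOKEN=example-token\n")

with open(env_path) as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            os.environ[key] = value

# xdmod-data picks up XDMOD_API_TOKEN from the environment when connecting
print(os.environ["XDMOD_API_TOKEN"])
```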

Use the XDMoD Data Analytics Framework Python API

Library Load

# load libraries
knitr::opts_chunk$set(echo = TRUE)

library(dotenv)
library(reticulate)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.3     ✔ readr     2.1.4
## ✔ forcats   1.0.0     ✔ stringr   1.5.0
## ✔ ggplot2   3.4.4     ✔ tibble    3.2.1
## ✔ lubridate 1.9.3     ✔ tidyr     1.3.0
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(plotly)
## 
## Attaching package: 'plotly'
## 
## The following object is masked from 'package:ggplot2':
## 
##     last_plot
## 
## The following object is masked from 'package:stats':
## 
##     filter
## 
## The following object is masked from 'package:graphics':
## 
##     layout
library(knitr)
library(rmarkdown)

# replace xdmod-notebooks with the name of the conda environment for python to use
use_condaenv("xdmod-notebooks")
# on windows, ~ points to Documents; adjust accordingly
load_dot_env(path.expand("~/xdmod-data.env"))

Using Python chunks

First, initialize the XDMoD Data Warehouse. Run the code below to prepare for getting data from the XDMoD data warehouse at the given URL.

# This is a python chunk
# Initialize the XDMoD Data Warehouse
from xdmod_data.warehouse import DataWarehouse
dw = DataWarehouse('https://xdmod.access-ci.org')

Next, get the data. Run the code below to use the get_data() method to request data from XDMoD and load them into a DataFrame. This example gets the number of active users of ACCESS-allocated resources over a 4-month period. Each of the parameters of the method will be explained later in this notebook. Use with to create a runtime context; this is also explained later in this notebook.

# This is a python chunk
# Get data
with dw:
    data = dw.get_data(
        duration=('2023-01-01', '2023-04-30'),
        realm='Jobs',
        metric='Number of Users: Active',
    )

A slight modification to the converted DataFrame: the tidyverse approach does not use row names, so we move them into a column and convert the text to dates.

# Use data in R
df <- py$data %>% 
    rownames_to_column(var="Time") %>% # Move row names to column
    tibble() %>% # use newer data.frame
    mutate(Time=ymd(Time)) # convert character to date

df %>% paged_table()
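The same reshaping can also be done on the Python side with pandas before the frame reaches R. A sketch with made-up values standing in for the XDMoD result:

```python
import pandas as pd

# Stand-in for the DataFrame returned by get_data(): dates live in the index
data = pd.DataFrame(
    {"Number of Users: Active": [4, 7, 5]},
    index=["2023-01-01", "2023-01-02", "2023-01-03"],
)

# pandas equivalent of rownames_to_column("Time") + mutate(Time=ymd(Time)):
# move the index into a "Time" column and parse it as datetimes
df = data.rename_axis("Time").reset_index()
df["Time"] = pd.to_datetime(df["Time"])
print(list(df.columns))
```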

Plot the data

# plot
ggplotly(
    ggplot(df, aes(x=Time,y=`Number of Users: Active`)) +
        geom_line()
)

Calling from R using reticulate

Now we can repeat the same steps using the reticulate-generated R interface. The steps are the same: initialize the warehouse, get the data, convert it, and use it.

# Initialize the XDMoD Data Warehouse
warehouse <- import("xdmod_data.warehouse")
dw <- warehouse$DataWarehouse('https://xdmod.access-ci.org')
# Query data
with(dw,{
    data2 <- dw$get_data(
        duration=c('2023-01-01', '2023-04-30'),
        realm='Jobs',
        metric='Number of Users: Active'
)})
# Use data in R
df2 <- data2 %>% 
    rownames_to_column(var="Time") %>% # Move row names to column
    tibble() %>% # use newer data.frame
    mutate(Time=ymd(Time)) # convert character to date

df2
# plot
ggplotly(
    ggplot(df2, aes(x=Time,y=`Number of Users: Active`)) +
        geom_line()
)

Do further data processing

You can do further processing on the DataFrame to produce analysis and plots beyond those that are available in the XDMoD portal.

Run the code below to add a column for the day of the week:

df2$weekday <- factor(
    weekdays(df2$Time), 
    levels=c("Monday", "Tuesday", "Wednesday", "Thursday", "Friday", 
             "Saturday", "Sunday"))
df2
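The same derived column can be computed on the Python side before conversion. A sketch with stand-in dates, using pandas' dt.day_name():

```python
import pandas as pd

# Stand-in dates; the real frame comes from get_data() as shown above
df = pd.DataFrame({
    "Time": pd.to_datetime(["2023-01-01", "2023-01-02"]),
    "Number of Users: Active": [4, 7],
})

# pandas equivalent of R's weekdays(): label each row with its day of week
df["weekday"] = df["Time"].dt.day_name()
print(df["weekday"].tolist())
```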

Run the code below to show a box plot of the data grouped by day of the week:

ggplotly(
    ggplot(df2, aes(x=weekday,y=`Number of Users: Active`)) +
        geom_boxplot()
)

Details of the get_data() method

Now that you have seen a basic example of using the get_data() method, read below for more details on how it works.

Wrap data warehouse calls in a runtime context

XDMoD data is accessed over a network connection, which involves establishing connections and creating temporary resources. To ensure these connections and resources are cleaned up properly in spite of any runtime errors, you should call data warehouse methods within a runtime context, using Python’s with statement to wrap the execution of XDMoD queries. Store the results, and execute any long-running calculations outside of the runtime context, as in the template below. The reticulate package in R mirrors this layout with its with() function.

with(dw, {
    # XDMoD queries would go here
})
# Data processing would go here
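The underlying Python pattern that this R template mirrors is an ordinary context manager. The toy class below stands in for the real DataWarehouse so the sketch runs offline; __enter__ sets up the connection and __exit__ tears it down even if the body raises:

```python
# Toy stand-in for DataWarehouse, illustrating why the with block matters
class FakeWarehouse:
    def __init__(self):
        self.connected = False

    def __enter__(self):
        self.connected = True  # the real class connects to XDMoD here
        return self

    def __exit__(self, exc_type, exc, tb):
        self.connected = False  # cleanup runs on success and on error alike
        return False  # do not swallow exceptions

dw = FakeWarehouse()
with dw:
    queried_while_open = dw.connected  # XDMoD queries would go here
# Data processing would go here, after the connection is closed
print(queried_while_open, dw.connected)
```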

Default parameters

The get_data() method has a number of parameters; their default values are shown below, and the parameters are explained in more detail further below.

with(dw, {
    data <- dw$get_data(
        duration='Previous month',
        realm='Jobs',
        metric='CPU Hours: Total',
        dimension='None',
        filters=list(),
        dataset_type='timeseries',
        aggregation_unit='Auto'
    )})
data

Duration

The duration provides the time constraints of the data to be fetched from the XDMoD data warehouse.

As already seen, you can specify the duration as start and end times:

with(dw, {
    data <- dw$get_data(duration=c('2023-01-01', '2023-04-30'))})
data

You can instead specify the duration using a special string value; a list of the valid values can be obtained by calling the get_durations() method.

with(dw, {
    durations = dw$get_durations()
    })
durations
## [[1]]
## [1] "Yesterday"
## 
## [[2]]
## [1] "7 day"
## 
## [[3]]
## [1] "30 day"
## 
## [[4]]
## [1] "90 day"
## 
## [[5]]
## [1] "Month to date"
## 
## [[6]]
## [1] "Previous month"
## 
## [[7]]
## [1] "Quarter to date"
## 
## [[8]]
## [1] "Previous quarter"
## 
## [[9]]
## [1] "Year to date"
## 
## [[10]]
## [1] "Previous year"
## 
## [[11]]
## [1] "1 year"
## 
## [[12]]
## [1] "2 year"
## 
## [[13]]
## [1] "3 year"
## 
## [[14]]
## [1] "5 year"
## 
## [[15]]
## [1] "10 year"
## 
## [[16]]
## [1] "2024"
## 
## [[17]]
## [1] "2023"
## 
## [[18]]
## [1] "2022"
## 
## [[19]]
## [1] "2021"
## 
## [[20]]
## [1] "2020"
## 
## [[21]]
## [1] "2019"
## 
## [[22]]
## [1] "2018"

Realm

A realm is a category of data in the XDMoD data warehouse. You can use the describe_realms() method to get a DataFrame containing the list of available realms.

with(dw, {
    realms <- dw$describe_realms()
    })
realms

Metric

A metric is a statistic for which data exists in a given realm. You can use the describe_metrics(realm) method to get a DataFrame containing the list of valid metrics in the given realm. The realm must be passed in as a string.

with(dw,{
    metrics <- dw$describe_metrics('Jobs')})
metrics %>% kable(format="html")
label description
ACCESS Credit Equivalents Charged: Per Job (SU)

The average amount of ACCESS Credit Equivalents charged per compute job.<br/>

The ACCESS Credit Equivalent is a measure of how much compute time was used on each resource. One ACCESS Credit Equivalent is defined as one CPU Hour on SDSC Expanse (an AMD EPYC 7742 based compute resource). The ACCESS Credit Equivalent allows comparison between usage of node-allocated, core-allocated and GPU-allocated resources. It also allows a comparison between resources with different compute power per core. The <a href="https://allocations.access-ci.org/exchange_calculator"; target="_blank" rel="noopener noreferrer">ACCESS allocations exchange calculator</a> lists conversion rates between an ACCESS Credit Equivalent and a service unit on a resource.
ACCESS Credit Equivalents Charged: Total (SU)

The total amount of ACCESS Credit Equivalents charged.<br/>

The ACCESS Credit Equivalent is a measure of how much compute time was used on each resource. One ACCESS Credit Equivalent is defined as one CPU Hour on SDSC Expanse (an AMD EPYC 7742 based compute resource). The ACCESS Credit Equivalent allows comparison between usage of node-allocated, core-allocated and GPU-allocated resources. It also allows a comparison between resources with different compute power per core. The <a href="https://allocations.access-ci.org/exchange_calculator"; target="_blank" rel="noopener noreferrer">ACCESS allocations exchange calculator</a> lists conversion rates between an ACCESS Credit Equivalent and a service unit on a resource.
ACCESS Utilization (%) The percentage of the ACCESS obligation of a resource that has been utilized by ACCESS jobs.<br/><i> ACCESS Utilization:</i> The ratio of the total CPU hours consumed by ACCESS jobs over a given time period divided by the total CPU hours that the system is contractually required to provide to ACCESS during that period. It does not include non-ACCESS jobs.<br/>It is worth noting that this value is a rough estimate in certain cases where the resource providers don’t provide accurate records of their system specifications, over time.
Allocation Usage Rate (XD SU/Hour) The rate of ACCESS allocation usage in XD SUs per hour.
Allocation Usage Rate ACEs (SU/Hour) The rate of ACCESS allocation usage in ACCESS Credit Equivalents per hour.
CPU Hours: Per Job The average CPU hours (number of CPU cores x wall time hours) per ACCESS job.<br/>For each job, the CPU usage is aggregated. For example, if a job used 1000 CPUs for one minute, it would be aggregated as 1000 CPU minutes or 16.67 CPU hours.
CPU Hours: Total The total CPU hours (number of CPU cores x wall time hours) used by ACCESS jobs.<br/>For each job, the CPU usage is aggregated. For example, if a job used 1000 CPUs for one minute, it would be aggregated as 1000 CPU minutes or 16.67 CPU hours.
Job Size: Max (Core Count) The maximum size ACCESS job in number of cores.<br/><i>Job Size: </i>The total number of processor cores used by a (parallel) job.
Job Size: Min (Core Count) The minimum size ACCESS job in number of cores.<br/><i>Job Size: </i>The total number of processor cores used by a (parallel) job.
Job Size: Normalized (% of Total Cores) The percentage average size ACCESS job over total machine cores.<br><i>Normalized Job Size: </i>The percentage total number of processor cores used by a (parallel) job over the total number of cores on the machine.
Job Size: Per Job (Core Count) The average job size per ACCESS job.<br><i>Job Size: </i>The number of processor cores used by a (parallel) job.
Job Size: Weighted By ACEs (Core Count) The average job size weighted by charge in ACCESS Credit Equivalents (ACEs). Defined as <br><i>Average Job Size Weighted By ACEs: </i> sum(i = 0 to n){job i core count*job i charge in ACEs}/sum(i = 0 to n){job i charge in ACEs}
Job Size: Weighted By CPU Hours (Core Count) The average ACCESS job size weighted by CPU Hours. Defined as <br><i>Average Job Size Weighted By CPU Hours: </i> sum(i = 0 to n){ job i core count * job i cpu hours}/sum(i = 0 to n){job i cpu hours}
Job Size: Weighted By XD SUs (Core Count) The average ACCESS job size weighted by charge in XD SUs. Defined as <br><i>Average Job Size Weighted By XD SUs: </i> sum(i = 0 to n){job i core count*job i charge in xd sus}/sum(i = 0 to n){job i charge in xd sus}
NUs Charged: Per Job

The average amount of NUs charged per ACCESS job.<br/> <i>NU - Normalized Units: </i>Roaming allocations are awarded in XSEDE Service Units (SUs). 1 XSEDE SU is defined as one CPU-hour on a Phase-1 DTF cluster. For usage on a resource that is charged to a Roaming allocation, a normalization factor is applied. The normalization factor is based on the method historically used to calculate ‘Normalized Units’ (Cray X-MP-equivalent SUs), which derives from a resource’s performance on the HPL benchmark.<br/>

Specifically, 1 Phase-1 DTF SU = 21.576 NUs, and the XD SU conversion factor for a resource is calculated by taking its NU conversion factor and dividing it by 21.576. The standard formula for calculating a resource’s NU conversion factor is: (Rmax * 1000 / 191) / P where Rmax is the resource’s Rmax result on the HPL benchmark in Gflops and P is the number of processors used in the benchmark. In the absence of an HPL benchmark run, a conversion factor can be agreed upon, based on that of an architecturally similar platform and scaled according to processor performance differences.<br/>

Conversion to Roaming SUs is handled by the XSEDE central accounting system, and RPs are only required to report usage in local SUs for all allocations.<br/>

Defining an SU charge for specialized compute resources (such as visualization hardware) or non-compute resources (such as storage) is possible, but there is no XSEDE-wide policy for doing so.
NUs Charged: Total

The total amount of NUs charged by ACCESS jobs.<br/> <i>NU - Normalized Units: </i>Roaming allocations are awarded in XSEDE Service Units (SUs). 1 XSEDE SU is defined as one CPU-hour on a Phase-1 DTF cluster. For usage on a resource that is charged to a Roaming allocation, a normalization factor is applied. The normalization factor is based on the method historically used to calculate ‘Normalized Units’ (Cray X-MP-equivalent SUs), which derives from a resource’s performance on the HPL benchmark.<br/>

Specifically, 1 Phase-1 DTF SU = 21.576 NUs, and the XD SU conversion factor for a resource is calculated by taking its NU conversion factor and dividing it by 21.576. The standard formula for calculating a resource’s NU conversion factor is: (Rmax * 1000 / 191) / P where Rmax is the resource’s Rmax result on the HPL benchmark in Gflops and P is the number of processors used in the benchmark. In the absence of an HPL benchmark run, a conversion factor can be agreed upon, based on that of an architecturally similar platform and scaled according to processor performance differences.<br/>

Conversion to Roaming SUs is handled by the XSEDE central accounting system, and RPs are only required to report usage in local SUs for all allocations.<br/>

Defining an SU charge for specialized compute resources (such as visualization hardware) or non-compute resources (such as storage) is possible, but there is no XSEDE-wide policy for doing so.
Node Hours: Per Job The average node hours (number of nodes x wall time hours) per ACCESS job.
Node Hours: Total The total node hours (number of nodes x wall time hours) used by ACCESS jobs.
Number of Allocations: Active The total number of funded projects that used ACCESS resources.
Number of Institutions: Active The total number of institutions that used ACCESS resources.
Number of Jobs Ended The total number of ACCESS jobs that ended within the selected duration.
Number of Jobs Running The total number of ACCESS jobs that are running.
Number of Jobs Started The total number of ACCESS jobs that started executing within the selected duration.
Number of Jobs Submitted The total number of ACCESS jobs that were submitted/queued within the selected duration.
Number of Jobs via Gateway The total number of ACCESS jobs submitted through gateways (e.g., via a community user account) that ended within the selected duration.<br/><i>Job: </i>A scheduled process for a computer resource in a batch processing environment.
Number of PIs: Active The total number of PIs that used ACCESS resources.
Number of Resources: Active The total number of active ACCESS resources.
Number of Users: Active The total number of users that used ACCESS resources.
User Expansion Factor Gauging ACCESS job-turnaround time, it measures the ratio of wait time and the total time from submission to end of execution.<br/><i>User Expansion Factor = ((wait duration + wall duration) / wall duration). </i>
Wait Hours: Per Job The average time, in hours, an ACCESS job waits before execution on the designated resource.<br/><i>Wait Time: </i>Wait time is defined as the linear time between submission of a job by a user until it begins to execute.
Wait Hours: Total The total time, in hours, ACCESS jobs waited before execution on their designated resource.<br/><i>Wait Time: </i>Wait time is defined as the linear time between submission of a job by a user until it begins to execute.
Wall Hours: Per Job The average time, in hours, a job takes to execute.<br/>In timeseries view mode, the statistic shows the average wall time per job per time period. In aggregate view mode the statistic only includes the job wall hours between the defined time range. The wall hours outside the time range are not included in the calculation.<br /> <i>Wall Time:</i> Wall time is defined as the linear time between start and end time of execution for a particular job.
Wall Hours: Total The total time, in hours, ACCESS jobs took to execute.<br/><i>Wall Time:</i> Wall time is defined as the linear time between start and end time of execution for a particular job.
XD SUs Charged: Per Job

The average amount of XD SUs charged per ACCESS job.<br/> <i>XD SU: </i>1 XSEDE SU is defined as one CPU-hour on a Phase-1 DTF cluster.<br/> <i>SU - Service Units: </i>Computational resources on the XSEDE are allocated and charged in service units (SUs). SUs are defined locally on each system, with conversion factors among systems based on HPL benchmark results.<br/>

Current TeraGrid supercomputers have complex multi-core and memory hierarchies. Each resource has a specific configuration that determines the number (N) of cores that can be dedicated to a job without slowing the code (and other user and system codes). Each resource defines for its system the minimum number of SUs charged for a job running in the default batch queue, calculated as wallclock runtime multiplied by N. Minimum charges may apply.<br/>

Note: The actual charge will depend on the specific requirements of the job (e.g., the mapping of the cores across the machine, or the priority you wish to obtain).<br/>

Note 2: The SUs show here have been normalized against the XSEDE Roaming service. Therefore they are comparable across resources.
XD SUs Charged: Total

The total amount of XD SUs charged by ACCESS jobs.<br/> <i>XD SU: </i>1 XSEDE SU is defined as one CPU-hour on a Phase-1 DTF cluster.<br/> <i>SU - Service Units: </i>Computational resources on the XSEDE are allocated and charged in service units (SUs). SUs are defined locally on each system, with conversion factors among systems based on HPL benchmark results.<br/>

Current TeraGrid supercomputers have complex multi-core and memory hierarchies. Each resource has a specific configuration that determines the number (N) of cores that can be dedicated to a job without slowing the code (and other user and system codes). Each resource defines for its system the minimum number of SUs charged for a job running in the default batch queue, calculated as wallclock runtime multiplied by N. Minimum charges may apply.<br/>

Note: The actual charge will depend on the specific requirements of the job (e.g., the mapping of the cores across the machine, or the priority you wish to obtain).<br/>

Note 2: The SUs show here have been normalized against the XSEDE Roaming service. Therefore they are comparable across resources.

Dimension

A dimension is a grouping of data. You can use the describe_dimensions(realm) method to get a DataFrame containing the list of valid dimensions in the given realm. The realm must be passed in as a string.

with(dw,{
    dimensions <- dw$describe_dimensions('Jobs') })
dimensions %>% kable(format="html")
label description
None Summarizes jobs reported to the ACCESS allocations service (excludes non-ACCESS usage of the resource).
Allocation A funded project that is allowed to run jobs on resources.
Field of Science The field of science indicated on the allocation request pertaining to the running jobs.
Gateway A science gateway is a portal set up to aid submitting jobs to resources.
Grant Type A categorization of the projects/allocations.
Job Size A categorization of jobs into discrete groups based on the number of cores used by each job.
Job Wait Time A categorization of jobs into discrete groups based on the total linear time each job waited.
Job Wall Time A categorization of jobs into discrete groups based on the total linear time each job took to execute.
NSF Directorate The NSF directorate of the field of science indicated on the allocation request pertaining to the running jobs.
Node Count A categorization of jobs into discrete groups based on node count.
PI The principal investigator of a project.
PI Institution Organizations that have PIs with allocations.
PI Institution Country The country of the institution of the PI of the project associated with compute jobs.
PI Institution State The location of the institution of the PI of the project associated with the compute jobs.
Parent Science The parent of the field of science indicated on the allocation request pertaining to the running jobs.
Queue Queue pertains to the low level job queues on each resource.
Resource A resource is a remote computer that can run jobs.
Resource Type A categorization of resources by their general capabilities.
Service Provider A service provider is an institution that hosts resources.
System Username The specific system username of the users who ran jobs.
User A person who is on a PI’s allocation, hence able to run jobs on resources.
User Institution Organizations that have users with allocations.
User Institution Country The name of the country of the institution of the person who ran the compute job.
User Institution State The location of the institution of the person who ran the compute job.
User NSF Status Categorization of the users who ran jobs.

Pass in realms, metrics, and dimensions using labels or IDs

For methods in the API that take realms, metrics, and/or dimensions as arguments, you can pass them in as their labels or their IDs.

with(dw, {
    data <- dw$get_data(
        duration='10 year',
        realm='Allocations',
        metric='NUs: Allocated', # 'allocated_nu' also works
        dimension='Resource Type'  # 'resource_type' also works
    )})
data

Filters

Filters allow you to include only data that have certain values for given dimensions. You can use the get_filter_values(realm, dimension) method to get a DataFrame containing the list of valid filter values for the given dimension in the given realm. The realm and dimension must be passed in as strings.

with(dw, {
    filter_values <- dw$get_filter_values('Jobs', 'Resource') }) # 'resource' also works
filter_values

For methods in the API that take filters as arguments, you must specify the filters as a named list (which reticulate converts to a Python dictionary) in which the names are dimensions (labels or IDs) and the values are string filter values (labels or IDs) or vectors of string filter values. For example, to return only data for which the field of science is Biophysics and the resource is either NCSA Delta GPU or TACC Stampede2:

with(dw,{
    data <- dw$get_data(
        filters=list(
            'Field of Science'='Biophysics', # 'fieldofscience': '246' also works
            'Resource'= c( # 'resource' also works
                'NCSA DELTA GPU', # '3032' also works
                'STAMPEDE2 TACC' # '2825' also works
            )
        )
    )})

data
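On the Python side the filters argument is a plain dict, which is what reticulate builds from the R named list above. A sketch using the labels from the example (the alternate numeric IDs are not verified here):

```python
# filters maps each dimension to a single value or a list of values
filters = {
    "Field of Science": "Biophysics",
    "Resource": ["NCSA DELTA GPU", "STAMPEDE2 TACC"],
}

# Both a single string and a sequence of strings are accepted value shapes
assert isinstance(filters["Field of Science"], str)
assert isinstance(filters["Resource"], list)
print(sorted(filters))
```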

Dataset Type

The dataset type can either be ‘timeseries’ (the default), in which data is grouped by a time aggregation unit, or ‘aggregate’, in which the data is aggregated across the entire duration. For ‘aggregate’, the results are returned as a Pandas Series rather than a DataFrame.

with(dw, {
    data <- dw$get_data(dataset_type='timeseries')})

data
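The difference between the two dataset types can be pictured with plain pandas: a timeseries result has one row per time period, while aggregating over the whole duration leaves a single value per metric, hence a Series. A toy sketch with made-up numbers:

```python
import pandas as pd

# Toy timeseries result: one row per day, as dataset_type='timeseries' returns
timeseries = pd.DataFrame(
    {"CPU Hours: Total": [10.0, 20.0, 30.0]},
    index=pd.to_datetime(["2023-01-01", "2023-01-02", "2023-01-03"]),
)

# Collapsing the whole duration leaves one value per metric, which is why
# dataset_type='aggregate' yields a pandas Series instead of a DataFrame
aggregate = timeseries.sum()
print(type(aggregate).__name__, aggregate["CPU Hours: Total"])
```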

Aggregation unit

The aggregation unit specifies how data is aggregated by time. You can get a list of valid aggregation units by calling the get_aggregation_units() method.

with(dw,{
    aggregation_units <- dw$get_aggregation_units()})

aggregation_units
## [[1]]
## [1] "Auto"
## 
## [[2]]
## [1] "Day"
## 
## [[3]]
## [1] "Month"
## 
## [[4]]
## [1] "Quarter"
## 
## [[5]]
## [1] "Year"

Additional Examples

For additional examples, please see the xdmod-notebooks repository.


XDMoD Data Analytics Framework v1.0.0